-
Cooperative perception (CP) extends detection range and situational awareness in connected and autonomous vehicles by aggregating information from multiple agents. However, attackers can inject fabricated data into shared messages to mount adversarial attacks. While prior defenses detect object spoofing, object removal attacks remain a serious threat; yet existing removal attacks require unnaturally large perturbations and rely on unrealistic assumptions, such as complete knowledge of participating agents, which limits their success. In this paper, we present SOMBRA, a stealthy and practical object removal attack that exploits the attentive fusion mechanism in modern CP algorithms. SOMBRA achieves 99% success in both targeted and mass object removal scenarios (a 90%+ improvement over prior art) with less than 1% perturbation strength and no knowledge of benign agents other than the victim. To address the unique vulnerabilities of attentive fusion in CP, we propose LUCIA, a novel trustworthiness-aware attention mechanism that proactively mitigates adversarial features. LUCIA achieves 94.93% success against targeted attacks, reduces mass removal rates by over 90%, restores detection to baseline levels, and lowers defense overhead by 300x compared to prior art. Our contributions set a new state-of-the-art for adversarial attacks and defenses in CP.
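For context, below is a minimal sketch of the kind of per-agent attentive fusion such attacks target, plus a trust-weighted variant in the spirit of a trustworthiness-aware attention defense. The tensor shapes, dot-product scoring, and trust reweighting are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def attentive_fusion(ego_feat, agent_feats):
    """Fuse per-agent BEV feature maps with ego-query attention.
    ego_feat: (C, H, W); agent_feats: (N, C, H, W). Shapes and the
    dot-product scoring are illustrative assumptions."""
    feats = torch.cat([ego_feat.unsqueeze(0), agent_feats], dim=0)   # (N+1, C, H, W)
    logits = (feats * ego_feat.unsqueeze(0)).sum(dim=1)              # per-pixel similarity, (N+1, H, W)
    attn = F.softmax(logits, dim=0)                                  # attention over agents
    return (attn.unsqueeze(1) * feats).sum(dim=0)                    # fused map, (C, H, W)

def trust_weighted_fusion(ego_feat, agent_feats, trust):
    """Same fusion, but softmax weights are rescaled by a per-agent trust
    score in [0, 1] (a hypothetical stand-in for a learned trust estimator),
    so a low-trust agent's features cannot dominate the fused map."""
    feats = torch.cat([ego_feat.unsqueeze(0), agent_feats], dim=0)
    logits = (feats * ego_feat.unsqueeze(0)).sum(dim=1)
    attn = F.softmax(logits, dim=0)
    trust = torch.cat([torch.ones(1), trust]).view(-1, 1, 1)         # ego is fully trusted
    attn = attn * trust
    attn = attn / attn.sum(dim=0, keepdim=True).clamp_min(1e-8)
    return (attn.unsqueeze(1) * feats).sum(dim=0)

# Example: two cooperating agents sharing 64-channel BEV features,
# one of which is assigned low trust.
ego = torch.randn(64, 32, 32)
shared = torch.randn(2, 64, 32, 32)
fused = trust_weighted_fusion(ego, shared, trust=torch.tensor([1.0, 0.1]))
```

The point of the sketch is that attention-weighted fusion lets any single agent's features dominate the fused map if its attention weights are large, which is the lever both the attack and the defense described above operate on.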
-
VehiGAN: Generative Adversarial Networks for Adversarially Robust V2X Misbehavior Detection Systems
Vehicle-to-Everything (V2X) communication enables vehicles to communicate with other vehicles and roadside infrastructure, enhancing traffic management and improving road safety. However, the open and decentralized nature of V2X networks exposes them to various security threats, especially misbehaviors, necessitating a robust Misbehavior Detection System (MBDS). While Machine Learning (ML) has proved effective in various anomaly detection applications, existing ML-based MBDSs have shown limited generalization due to the dynamic nature of V2X and insufficient, imbalanced training data, and they are known to be vulnerable to adversarial ML attacks. Generative Adversarial Networks (GANs), on the other hand, can mitigate these issues and improve detection performance by synthesizing unseen samples of minority classes and using them during model training. We therefore propose the first application of GANs to design an MBDS that detects any misbehavior and remains robust against adversarial perturbation. This article makes several key contributions. First, we propose an advanced threat model for stealthy V2X misbehavior in which the attacker transmits malicious data and masks it using adversarial attacks to evade detection by ML-based MBDSs, and we formulate two categories of adversarial attacks against anomaly-based MBDSs. In pursuit of a generalized and robust GAN-based MBDS, we then train and evaluate a diverse set of Wasserstein GAN (WGAN) models and present VehicularGAN (VehiGAN), an ensemble of multiple top-performing WGANs that transcends the limitations of individual models and improves detection performance. We also present a physics-guided data preprocessing technique that generates effective features for ML-based MBDSs. In the evaluation, we leverage the state-of-the-art V2X attack simulation tool VASP to create a comprehensive dataset of V2X messages with diverse misbehaviors. Evaluation results show that VehiGAN outperforms the baseline in 20 out of 35 misbehaviors and exhibits comparable detection performance in the remaining scenarios. In particular, VehiGAN excels at detecting advanced misbehaviors that manipulate multiple fields in V2X messages simultaneously, replicating unique maneuvers. Moreover, VehiGAN provides approximately 92% improvement in false positive rate under powerful adaptive adversarial attacks and possesses intrinsic robustness against other adversarial attacks that target the false negative rate. Finally, we make the data and code available for reproducibility and future benchmarking at https://github.com/shahriar0651/VehiGAN.
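As an illustration of the general idea, below is a minimal sketch of how an ensemble of WGAN critics could score preprocessed V2X message features for anomaly detection. The feature dimension, critic architecture, ensemble size, and threshold are illustrative assumptions and do not reflect VehiGAN's actual design or training.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Minimal WGAN critic over a fixed-length V2X message feature vector
    (dimension and architecture are illustrative assumptions)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def ensemble_anomaly_score(critics, x):
    """Average the scores of several trained WGAN critics. Lower scores
    suggest the message deviates from the learned distribution of benign
    V2X traffic, so it is flagged as potential misbehavior."""
    with torch.no_grad():
        scores = torch.stack([c(x) for c in critics], dim=0)  # (K, B, 1)
    return scores.mean(dim=0).squeeze(-1)                      # (B,)

# Hypothetical usage: flag messages whose ensemble score falls below a
# threshold calibrated on benign validation data (values are placeholders).
critics = [Critic() for _ in range(3)]
msgs = torch.randn(8, 16)          # batch of preprocessed message features
threshold = -0.5                   # assumed calibration value
is_misbehavior = ensemble_anomaly_score(critics, msgs) < threshold
```

The ensemble step mirrors the motivation stated in the abstract: averaging over several independently trained critics smooths out the blind spots of any single model.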
-
Multi-Object Tracking (MOT) is a critical task in computer vision, with applications ranging from surveillance systems to autonomous driving. However, threats to MOT algorithms have not yet been widely studied. In particular, incorrect association between tracked objects and their assigned IDs can lead to severe consequences, such as wrong trajectory predictions. Previous attacks against MOT either focused on hijacking the trackers of individual objects or manipulated tracker IDs by attacking the integrated object detection (OD) module in the digital domain; such attacks are model-specific, non-robust, and only able to affect specific samples in offline datasets. In this paper, we present ADVTRAJ, the first online and physical ID-manipulation attack against tracking-by-detection MOT, in which an attacker uses adversarial trajectories to transfer its ID to a targeted object and confuse the tracking system, without attacking OD. Our simulation results in CARLA show that ADVTRAJ fools ID assignments with a 100% success rate in various scenarios for white-box attacks against SORT, and that the attacks transfer well (up to 93% attack success rate) to state-of-the-art (SOTA) MOT algorithms due to their common design principles. We characterize the patterns of trajectories generated by ADVTRAJ and propose two universal adversarial maneuvers that can be performed by a human walker/driver in everyday scenarios. Our work reveals under-explored weaknesses in the object association phase of SOTA MOT systems and provides insights into enhancing the robustness of such systems.
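For readers unfamiliar with the targeted stage, below is a minimal sketch of the IoU-gated Hungarian association used by SORT-style tracking-by-detection, the object-association phase this attack manipulates. The IoU gate value and plain-IoU cost are illustrative assumptions about a SORT-like tracker, not the paper's exact configuration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(track_boxes, det_boxes, iou_thresh=0.3):
    """Assign detections to existing track IDs: Hungarian matching on a
    negative-IoU cost matrix, keeping only pairs that pass the IoU gate."""
    if not track_boxes or not det_boxes:
        return []
    cost = np.array([[-iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= iou_thresh]

# Example: two tracks, two detections; each track keeps its own ID.
tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 11, 11), (19, 21, 29, 31)]
print(associate(tracks, dets))   # [(0, 0), (1, 1)]
```

Because IDs follow whichever (track, detection) pairing minimizes this cost, a carefully shaped physical trajectory can shift the cost matrix so that an attacker's track is matched to the victim's detection, which is the kind of ID transfer the abstract describes.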
-
Camera-based perception is a central component of visual perception in autonomous systems. Recent works have investigated latency attacks against perception pipelines, which can lead to a Denial-of-Service against the autonomous system. Unfortunately, these attacks lack real-world applicability, either relying on digital perturbations or requiring large, unscalable, and highly visible patches that cover the victim's view. In this paper, we propose Detstorm, a novel, physically realizable latency attack against camera-based perception. Detstorm uses projector perturbations to delay perception by creating a large number of adversarial objects. These objects are optimized on four objectives to evade filtering by multiple Non-Maximum Suppression (NMS) approaches. To maximize the number of created objects in a dynamic physical environment, Detstorm takes a unique greedy approach, segmenting the environment into “zones” containing distinct object classes and maximizing the number of created objects per zone. Detstorm adapts to changes in the environment in real time, recombining perturbation patterns via our zone-stitching process into a contiguous, physically projectable image. Evaluations in both simulated and real-world experiments show that Detstorm causes a 506% increase in detected objects on average, delays perception results by up to 8.1 seconds, and can cause physical consequences on real-world autonomous driving systems.
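To make the latency mechanism concrete, below is a minimal sketch of standard greedy NMS, the post-detection filtering stage such attacks must evade; its per-frame cost grows with the number of mutually low-overlap candidate boxes, which is why flooding the detector with many adversarial objects inflates latency. The IoU threshold and box format are illustrative assumptions, not Detstorm's multi-objective optimization.

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Standard greedy NMS over (x1, y1, x2, y2) boxes: repeatedly keep the
    highest-scoring box and suppress remaining boxes that overlap it. Each
    surviving, non-overlapping box triggers another pass, so many distinct
    adversarial objects directly increase the work done per frame."""
    order = np.argsort(scores)[::-1]          # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou <= iou_thresh]       # drop boxes that overlap the kept one
    return keep

# Example: the near-duplicate box is suppressed, the distant one survives.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(boxes, scores))   # [0, 2]
```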
-
Privacy engineering encompasses various methodologies and tools, including privacy strategies and privacy patterns, aimed at achieving systems that inherently respect privacy. Despite the collection of numerous privacy patterns, their practical application remains under-explored. This paper investigates the applicability of privacy patterns in the context of robotaxis, a use case in the broader Mobility-as-a-Service (MaaS) ecosystem. Using the LINDDUN framework for privacy threat elicitation, we analyze existing privacy patterns to address the identified privacy threats. Our findings reveal challenges in applying these patterns due to inconsistencies and a lack of guidance, as well as a lack of suitable privacy patterns for addressing several privacy threats. To fill these gaps, we propose ideas for new privacy patterns.